6 research outputs found

    Towards a Better Understanding of the Local Attractor in Particle Swarm Optimization: Speed and Solution Quality

    Particle Swarm Optimization (PSO) is a popular nature-inspired meta-heuristic for solving continuous optimization problems. Although this technique is widely used, the understanding of the mechanisms that make swarms so successful is still limited. We present the first substantial experimental investigation of the influence of the local attractor on the quality of exploration and exploitation. We compare in detail classical PSO with the social-only variant, where local attractors are ignored. To measure the exploration capabilities, we determine how frequently both variants return results in the neighborhood of the global optimum. We measure the quality of exploitation by considering only function values from runs that reached a search point sufficiently close to the global optimum, and then comparing in how many digits such values still deviate from the global minimum value. It turns out that the local attractor significantly improves exploration but sometimes reduces the quality of exploitation. As a compromise, we propose and evaluate a hybrid PSO which switches off its local attractors at a certain point in time. The effects mentioned can also be observed by measuring the potential of the swarm.
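
    A minimal Python sketch of the update the abstract refers to: classical inertia-weight PSO keeps a local attractor (each particle's personal best) alongside the global attractor, the social-only variant drops the local term, and the proposed hybrid drops it after a chosen iteration. The function name, default coefficients, and switch point below are illustrative assumptions, not values from the paper.

        import random

        def pso_hybrid(f, dim, bounds, n_particles=30, iters=1000, switch_at=None,
                       w=0.7, c1=1.5, c2=1.5):
            # Minimize f over bounds=(lo, hi) with inertia-weight PSO. If
            # switch_at is set, the local attractor (each particle's personal
            # best) is ignored from that iteration on, i.e. the swarm becomes
            # the social-only variant. All defaults are illustrative.
            lo, hi = bounds
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]                    # local attractors
            pbest_val = [f(p) for p in pos]
            g = min(range(n_particles), key=lambda i: pbest_val[i])
            gbest, gbest_val = pbest[g][:], pbest_val[g]   # global attractor
            for t in range(iters):
                use_local = switch_at is None or t < switch_at
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        local = c1 * r1 * (pbest[i][d] - pos[i][d]) if use_local else 0.0
                        social = c2 * r2 * (gbest[d] - pos[i][d])
                        vel[i][d] = w * vel[i][d] + local + social
                        pos[i][d] += vel[i][d]
                    val = f(pos[i])
                    if val < pbest_val[i]:
                        pbest[i], pbest_val[i] = pos[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = pos[i][:], val
            return gbest, gbest_val

        # Example: 10-dimensional sphere function, local attractors switched
        # off halfway through (the hybrid idea, with made-up settings).
        best, value = pso_hybrid(lambda x: sum(v * v for v in x), 10, (-5.0, 5.0),
                                 iters=500, switch_at=250)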

    Tuning differential evolution for artificial neural networks

    The efficacy of an optimization method often depends on the choice of a number of behavioural parameters. Research within this area has focused on devising schemes for adapting the behavioural parameters during optimization, so as to alleviate the need for a practitioner to select the parameters manually. But these schemes usually introduce new behavioural parameters that must be tuned. This study takes a different approach in which finding behavioural parameters that yield good performance is considered an optimization problem in its own right, and can therefore be tackled by an overlaid optimization method. In this work, variants of the general-purpose optimization method known as Differential Evolution have their behavioural parameters tuned so as to work well in the optimization of an Artificial Neural Network. The results show that DE variants using so-called adaptive parameters do not have a general performance advantage, as previously believed.
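
    For reference, a sketch of the classic DE/rand/1/bin scheme whose behavioural parameters (population size, differential weight F, crossover rate CR) are the quantities being tuned. The defaults below are common textbook values, not the tuned ones, and the function name and structure are illustrative assumptions rather than the paper's code.

        import random

        def de_rand_1_bin(f, dim, bounds, np_=20, F=0.5, CR=0.9, iters=1000):
            # Classic DE/rand/1/bin. The behavioural parameters np_, F and CR
            # are the quantities a meta-optimizer would tune; the defaults are
            # common textbook values, not the tuned ones.
            lo, hi = bounds
            pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
            fit = [f(x) for x in pop]
            for _ in range(iters):
                for i in range(np_):
                    a, b, c = random.sample([j for j in range(np_) if j != i], 3)
                    j_rand = random.randrange(dim)   # force at least one mutated gene
                    trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                             if (random.random() < CR or d == j_rand) else pop[i][d]
                             for d in range(dim)]
                    t_val = f(trial)
                    if t_val <= fit[i]:              # greedy selection
                        pop[i], fit[i] = trial, t_val
            best = min(range(np_), key=lambda i: fit[i])
            return pop[best], fit[best]

    Tuning in the sense of the abstract then amounts to treating (np_, F, CR) as a single search point of an overlaid optimizer whose objective is the error the resulting DE run achieves on the neural-network training task.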

    Particle Swarm Optimization with Disagreements on Stagnation


    Simplifying particle swarm optimization

    The general-purpose optimization method known as Particle Swarm Optimization (PSO) has received much attention in past years, with many attempts to find the variant that performs best on a wide variety of optimization problems. The focus of past research has been on making the PSO method more complex, as this is frequently believed to increase its adaptability to other optimization problems. This study takes the opposite approach and simplifies the PSO method. To compare the efficacy of the original PSO and the simplified variant, an easy technique is presented for efficiently tuning their behavioural parameters. The technique works by employing an overlaid meta-optimizer, which is capable of simultaneously tuning parameters with regard to multiple optimization problems, whereas previous approaches to meta-optimization have tuned behavioural parameters to work well on just a single optimization problem. It is then found that not only do the PSO method and its simplified variant have comparable performance when optimizing a number of Artificial Neural Network problems, but the simplified variant also appears to offer a small improvement in some cases.
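
    The multi-problem meta-optimization the abstract describes can be sketched as an outer loop that scores one behavioural-parameter vector by the summed performance the inner optimizer achieves across several problems. Everything below is an illustrative skeleton, assuming plain random search as the overlay (the study uses a proper meta-optimizer) and caller-supplied hooks; none of the names come from the paper.

        def meta_fitness(run_inner, params, problems):
            # Score one behavioural-parameter vector by the summed best value
            # the inner optimizer reaches across several problems -- the
            # multi-problem tuning the abstract describes. run_inner(params,
            # f, dim, bounds) runs one inner optimization and returns its best
            # objective value; it could, for instance, wrap the pso_hybrid
            # sketch above. All names are illustrative.
            return sum(run_inner(params, f, dim, bounds) for f, dim, bounds in problems)

        def meta_optimize(run_inner, problems, sample_params, outer_iters=50):
            # Overlaid meta-optimizer, here plain random search over parameter
            # vectors drawn by sample_params(); the paper overlays a real
            # optimizer, random search just keeps the sketch short.
            best_p, best_v = None, float("inf")
            for _ in range(outer_iters):
                p = sample_params()
                v = meta_fitness(run_inner, p, problems)
                if v < best_v:
                    best_p, best_v = p, v
            return best_p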